Multi-agent reinforcement learning (MARL) suffers from the non-stationarity problem: the learning targets keep changing at every iteration as multiple agents update their policies simultaneously. Starting from first principles, in this paper we address the non-stationarity problem by proposing bidirectional action-dependent Q-learning (ACE). Central to ACE is a sequential decision-making process in which only one agent takes an action at a time. Within this process, each agent maximizes its value function at the inference stage given the actions taken by the preceding agents. In the learning phase, each agent minimizes a TD error that depends on how the subsequent agents have reacted to its chosen action. Given this design of bidirectional dependency, ACE effectively turns a multi-agent MDP into a single-agent MDP. We implement the ACE framework by identifying a network representation that formulates the action dependency, so that the sequential decision process is computed implicitly in one forward pass. To validate ACE, we compare it with strong baselines on two MARL benchmarks. Empirical experiments demonstrate that ACE outperforms state-of-the-art algorithms on Google Research Football and the StarCraft Multi-Agent Challenge by a large margin. In particular, on SMAC tasks, ACE achieves a 100% success rate on almost all the hard and super-hard maps. We further study extensive research problems regarding ACE, including its extension, generalization, and practicability. Code is made available to facilitate further research.
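The inference stage of the sequential scheme can be illustrated with a minimal tabular sketch: agent i greedily picks the action maximizing its Q-values conditioned on the actions already chosen by the preceding agents. The Q-table layout and names here are hypothetical; the paper's actual implementation uses a network that computes this in one forward pass.

```python
def sequential_actions(q_tables, state, action_space):
    """Greedy inference pass of the sequential decision process: agent i
    selects the action maximizing its Q-values given the actions already
    chosen by preceding agents. Hypothetical tabular setup, not the
    paper's network implementation."""
    chosen = []
    for q in q_tables:  # one Q-table per agent, in a fixed decision order
        key = (state, tuple(chosen))
        # default to zero values for unseen (state, preceding-actions) keys
        values = q.get(key, {a: 0.0 for a in action_space})
        chosen.append(max(values, key=values.get))
    return chosen
```

Because each agent conditions on the preceding choices, the joint action is built one agent at a time, which is what reduces the multi-agent problem to a single-agent one.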
The ability to manipulate objects in cluttered environments has long been desired by the robotics community. However, most works focus only on manipulation, rather than on surfacing the semantic information hidden among the cluttered objects. In this work, we introduce a scene graph for embodied exploration in cluttered scenes to address this problem. To validate our method in cluttered scenarios, we adopt the Manipulation Question Answering (MQA) task as our test benchmark, which requires an embodied robot to have both active exploration ability and semantic understanding of vision and language. For this task, we propose an imitation learning method to generate exploratory manipulations. Meanwhile, a VQA model based on a dynamic scene graph is adopted to understand a series of RGB frames from the manipulator's wrist camera, together with every step of the manipulation, in order to answer questions within our framework. Our proposed framework is effective for the MQA task, which is representative of tasks in cluttered scenes.
In this paper, we aim to improve the quality of service (QoS) of ultra-reliable and low-latency communications (URLLC) in interference-limited wireless networks. To obtain time diversity within the channel coherence time, we first propose a random repetition scheme that randomizes the interference power. We then optimize the number of reserved slots and the number of repetitions for each packet to minimize the QoS violation probability, defined as the percentage of users that cannot achieve URLLC. We build a cascaded random-edge graph neural network (REGNN) to represent the repetition scheme and develop a model-free unsupervised learning method to train it. We analyze the QoS violation probability using stochastic geometry in a symmetric scenario and apply a model-based exhaustive search (ES) method to find the optimal solution. Simulation results show that in the symmetric scenario, the QoS violation probabilities achieved by the model-free learning method and the model-based ES method are nearly identical. In more general scenarios, the cascaded REGNN generalizes well to wireless networks with different scales, network topologies, cell densities, and frequency reuse factors. It outperforms the model-based ES method in the presence of model mismatch.
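The reliability gain from repetitions can be sketched with a toy i.i.d. model: if each repetition of a packet independently succeeds with probability p, the packet is lost only when all repetitions fail, and one can search for the smallest repetition count meeting a reliability target. This is an illustrative simplification, not the paper's interference-limited stochastic-geometry model.

```python
def packet_failure_prob(p_success, repetitions):
    """Probability that all repetitions of a packet fail, assuming
    i.i.d. successes (a toy model, not the interference-limited setup)."""
    return (1.0 - p_success) ** repetitions

def min_repetitions(p_success, target_failure, max_k=64):
    """Smallest repetition count whose failure probability meets the
    reliability target, if one exists within max_k."""
    for k in range(1, max_k + 1):
        if packet_failure_prob(p_success, k) <= target_failure:
            return k
    raise ValueError("target not reachable within max_k repetitions")
```

In the actual system the repetitions are not independent, which is why the random repetition scheme randomizes the interference power to recover time diversity.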
We design a cooperative planning framework to produce optimal trajectories for a tethered robot duo that gathers scattered objects spread over a large area using a flexible net. Specifically, the proposed planning framework first produces a set of dense waypoints for each robot, which serve as the initialization for optimization. Next, we formulate an iterative optimization scheme to generate smooth and collision-free trajectories while ensuring cooperation within the robot duo, so that objects are collected efficiently and obstacles are properly avoided. We validate the generated trajectories in simulation with a model reference adaptive controller (MRAC) and implement them on physical robots to handle the unknown dynamics of the carried payload. In a series of studies, we find that (i) a U-shaped cost function is effective for planning cooperative robot duos, and (ii) task efficiency is not always proportional to the length of the tethered net. Given an environment configuration, our framework can estimate the optimal net length. To the best of our knowledge, ours is the first framework to provide such an estimate for tethered robot duos.
Knowledge graph embedding (KGE), which represents knowledge graphs through embeddings, has been a research hotspot in recent years. Real-world knowledge graphs are largely time-related, yet most existing KGE algorithms ignore temporal information. Some existing methods encode temporal information directly or indirectly but neglect the balance of the timestamp distribution, which greatly limits the performance of temporal knowledge graph completion (KGC). In this paper, we propose a temporal KGC method based on a framework that encodes temporal information directly, where a given time slice is treated as the most preferred granularity for balancing the timestamp distribution. Extensive experiments on temporal knowledge graph datasets extracted from the real world demonstrate the effectiveness of our method.
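One way to read the timestamp-balancing idea is equal-frequency binning: choose slice boundaries so that each time slice holds a near-equal number of facts, rather than an equal span of time. The helper names below are illustrative assumptions, not the paper's algorithm.

```python
import bisect

def equal_frequency_slices(timestamps, num_slices):
    """Boundaries that split the sorted timestamps into num_slices bins of
    near-equal population, so each time slice carries a balanced share of
    facts (an illustrative reading of the balancing idea)."""
    ts = sorted(timestamps)
    n = len(ts)
    return [ts[i * n // num_slices] for i in range(1, num_slices)]

def slice_index(t, boundaries):
    """Slice id of timestamp t given the boundaries."""
    return bisect.bisect_right(boundaries, t)
```

Equal-frequency slices keep any one slice from dominating training when the raw timestamps are heavily skewed.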
There are many artificial intelligence algorithms for autonomous driving, but directly installing these algorithms on vehicles is unrealistic and expensive. At the same time, many of these algorithms need an environment in which to train and be optimized. Simulation is a valuable and meaningful solution that provides both training and testing functions, and one can say that simulation is a critical link in the autonomous driving world. There are also many different simulation applications or systems from companies or academia, such as SVL and Carla. These simulators advertise that they offer the closest approximation to real-world driving, but their environment objects, such as pedestrians and other vehicles around the agent vehicle, are fixed in advance by programming. They can only move along pre-set trajectories, or their movements are determined by random numbers. What happens when all environment objects are also driven by artificial intelligence, so that their behaviors resemble real people or the natural reactions of other drivers? This question is a blind spot for most simulation applications, and they cannot easily solve it. The Neurorobotics Platform from the TUM team of Prof. Alois Knoll addresses the multi-agent problem with the concepts of "Engines" and "Transceiver Functions". This report starts with a brief study of the Neurorobotics Platform and analyzes the potential and possibility of developing a new simulator that achieves the goal of true real-world simulation. Then, based on the NRP-Core platform, this initial development aims to construct an initial demo experiment. The report begins with the basic knowledge of NRP-Core and its installation, then focuses on explaining the components necessary for a simulation experiment, and finally describes the details of constructing the autonomous driving system, which integrates object detection and autonomous control.
Medical image segmentation methods typically rely on numerous densely annotated images for model training, which are notoriously expensive and time-consuming to collect. To alleviate this burden, weakly supervised techniques have been exploited to train segmentation models with less expensive annotations. In this paper, we propose a novel point-supervised contrastive variance method (PSCV) for medical image semantic segmentation, which only requires one pixel-point from each organ category to be annotated. The proposed method trains the base segmentation network by using a novel contrastive variance (CV) loss to exploit the unlabeled pixels and a partial cross-entropy loss on the labeled pixels. The CV loss function is designed to exploit the statistical spatial distribution properties of organs in medical images and their variance distribution map representations to enforce discriminative predictions over the unlabeled pixels. Experimental results on two standard medical image datasets demonstrate that the proposed method outperforms the state-of-the-art weakly supervised methods on point-supervised medical image semantic segmentation tasks.
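The supervised half of the objective is easy to make concrete: a partial cross-entropy that averages the loss over only the few labeled pixel-points and ignores every other pixel. This sketch covers that term only; the contrastive variance loss on the unlabeled pixels is the paper's contribution and is omitted here.

```python
import math

def partial_cross_entropy(probs, point_labels):
    """Cross-entropy averaged over the labeled pixel-points only; all
    unlabeled pixels contribute nothing. probs maps (y, x) -> per-class
    probability list; point_labels maps (y, x) -> class index, one point
    per organ category. Illustrative sketch of the supervised term, not
    the full PSCV objective."""
    losses = [-math.log(probs[p][c]) for p, c in point_labels.items()]
    return sum(losses) / len(losses)
```

With one annotated point per category, this term alone gives a very sparse signal, which is why the unsupervised CV loss over the remaining pixels is needed.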
Unsupervised domain adaptation reduces the reliance on data annotation in deep learning by adapting knowledge from a source to a target domain. For privacy and efficiency concerns, source-free domain adaptation extends unsupervised domain adaptation by adapting a pre-trained source model to an unlabeled target domain without accessing the source data. However, most existing source-free domain adaptation methods focus on the transductive setting, where the target training set is also the testing set. In this paper, we address source-free domain adaptation in the more realistic inductive setting, where the target training and testing sets are mutually exclusive. We propose a new semi-supervised fine-tuning method named Dual Moving Average Pseudo-Labeling (DMAPL) for source-free inductive domain adaptation. We first split the unlabeled training set in the target domain into a pseudo-labeled confident subset and an unlabeled less-confident subset according to the prediction confidence scores from the pre-trained source model. Then we propose a soft-label moving-average updating strategy for the unlabeled subset based on a moving-average prototypical classifier, which gradually adapts the source model towards the target domain. Experiments show that our proposed method achieves state-of-the-art performance and outperforms previous methods by large margins.
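The soft-label moving-average idea can be sketched as an exponential moving average per sample: blend the stored soft label with the new prediction from the prototypical classifier, then renormalize. The momentum value and the renormalization step are assumptions for illustration, not the paper's exact update rule.

```python
def update_soft_label(old_soft, new_pred, momentum=0.9):
    """Moving-average update of one target sample's soft pseudo-label:
    mix the stored label with the latest prototypical-classifier
    prediction, then renormalize to a probability vector. Momentum value
    and renormalization are illustrative assumptions."""
    mixed = [momentum * o + (1.0 - momentum) * p
             for o, p in zip(old_soft, new_pred)]
    s = sum(mixed)
    return [m / s for m in mixed]
```

A high momentum keeps the pseudo-labels stable early in fine-tuning while still letting them drift toward the target domain as the classifier improves.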
Federated learning (FL) enables the building of robust and generalizable AI models by leveraging diverse datasets from multiple collaborators without centralizing the data. We created NVIDIA FLARE as an open-source software development kit (SDK) to make it easier for data scientists to use FL in their research and real-world applications. The SDK includes solutions for state-of-the-art FL algorithms and federated machine learning approaches, which facilitate building workflows for distributed learning across enterprises and enable platform developers to create a secure, privacy-preserving offering for multiparty collaboration utilizing homomorphic encryption or differential privacy. The SDK is a lightweight, flexible, and scalable Python package, and allows researchers to bring their data science workflows implemented in any training libraries (PyTorch, TensorFlow, XGBoost, or even NumPy) and apply them in real-world FL settings. This paper introduces the key design principles of FLARE and illustrates some use cases (e.g., COVID analysis) with customizable FL workflows that implement different privacy-preserving algorithms. Code is available at https://github.com/NVIDIA/NVFlare.
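The server-side aggregation step that such an FL workflow orchestrates can be sketched framework-free as FedAvg: a size-weighted average of client parameter vectors. This is plain Python for illustration and deliberately does not use NVIDIA FLARE's API, which is documented in the repository linked above.

```python
def federated_average(client_weights, client_sizes):
    """Size-weighted average of client parameter vectors (FedAvg-style
    aggregation, the kind of server-side step an FL framework performs
    each round). A framework-free sketch, not NVIDIA FLARE's API."""
    total = float(sum(client_sizes))
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]
```

In a privacy-preserving deployment, the client updates entering this average could additionally be protected with homomorphic encryption or perturbed for differential privacy, as the SDK supports.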
Temporal networks have been widely used to model real-world complex systems such as financial systems and e-commerce systems. In a temporal network, the joint neighborhood of a set of nodes often provides crucial structural information for predicting whether they may interact at a certain time. However, recent representation learning methods for temporal networks often fail to extract such information or depend on extremely time-consuming feature construction approaches. To address this issue, this work proposes the Neighborhood-Aware Temporal network model (NAT). For each node in the network, NAT abandons the commonly used single-vector-based representation and adopts a novel dictionary-type neighborhood representation. Such a dictionary representation records a set of neighboring nodes as keys and allows fast construction of structural features of the joint neighborhood of multiple nodes. We also design a dedicated data structure termed N-cache to support parallel access and updates of these dictionary representations on GPUs. NAT is evaluated on seven real-world large-scale temporal networks. NAT not only outperforms all cutting-edge baselines by averages of 5.9% and 6.0% in transductive and inductive link prediction accuracy, respectively, but also remains scalable, achieving speedups of 4.1-76.7x over baselines that adopt joint structural features and speedups of 1.6-4.0x over baselines that cannot adopt such features. Link to the code: https://github.com/graph-com/neighborhood-aware-temporal-network.
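The dictionary-type representation makes joint-neighborhood features cheap to build: intersecting the key sets of two nodes' caches yields their common neighbors along with both sides' stored features. The sketch below is an illustrative CPU version; the real N-cache is a parallel structure designed for GPU access and updates.

```python
def joint_neighborhood(cache_u, cache_v):
    """Joint-neighborhood structural features of two nodes: intersect the
    key sets of their dictionary representations to get the common
    neighbors, paired with the features each side stores for them. An
    illustrative CPU sketch of NAT's idea, not the GPU N-cache."""
    shared = cache_u.keys() & cache_v.keys()
    return {w: (cache_u[w], cache_v[w]) for w in sorted(shared)}
```

Because the intersection is a set operation over small per-node dictionaries, this avoids the expensive joint-neighborhood feature construction that slows down prior methods.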